Inferring the intentions and emotions of others from behavior is crucial for social cognition. While neuroimaging studies have identified brain regions involved in social inference, it remains unknown whether social inference is an abstract computation that generalizes across stimulus categories or is specific to a particular stimulus domain. We recorded single-neuron activity from the medial temporal lobe (MTL) and the medial frontal cortex (MFC) in neurosurgical patients performing different types of inferences from images of faces, hands, and natural scenes. Our findings reveal distinct neuron populations in both regions that encode inference type for social (faces, hands) and nonsocial (scenes) stimuli, while stimulus category was itself represented in a task-general manner. Uniquely in the MTL, social inference type was represented by separate subsets of neurons for faces and hands, suggesting a domain-specific representation. These results reveal evidence for specialized social inference processes in the MTL, in which inference representations were entangled with stimulus type, as expected from a domain-specific process.
Free, publicly accessible full text available December 6, 2025.
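The distinction drawn above, between a task-general code that transfers across stimulus categories and a domain-specific code that does not, is commonly tested with cross-category decoding: train a decoder on one stimulus category and test it on another. A minimal sketch of that logic on synthetic data (not the study's actual analysis pipeline; the population codes here are simulated):

```python
import numpy as np

def cross_category_decoding(X_train, y_train, X_test, y_test):
    """Train a nearest-centroid decoder on one stimulus category and
    test it on another; chance level is 0.5 for two inference types."""
    c0 = X_train[y_train == 0].mean(axis=0)  # centroid for inference type 0
    c1 = X_train[y_train == 1].mean(axis=0)  # centroid for inference type 1
    d0 = np.linalg.norm(X_test - c0, axis=1)
    d1 = np.linalg.norm(X_test - c1, axis=1)
    pred = (d1 < d0).astype(int)             # assign to nearer centroid
    return (pred == y_test).mean()

rng = np.random.default_rng(4)
n_trials, n_units = 200, 40
y = rng.integers(0, 2, n_trials)             # inference type per trial
signal = np.outer(y - 0.5, np.ones(n_units))
# Task-general code: the same tuning appears for both categories
faces = 2 * signal + rng.normal(size=(n_trials, n_units))
scenes_general = 2 * signal + rng.normal(size=(n_trials, n_units))
# Domain-specific code: tuning differs (here, flips) in the other category
scenes_specific = -2 * signal + rng.normal(size=(n_trials, n_units))

acc_general = cross_category_decoding(faces, y, scenes_general, y)
acc_specific = cross_category_decoding(faces, y, scenes_specific, y)
```

With a shared code, cross-category accuracy stays high; with a category-dependent code, it collapses to or below chance, which is the signature of the entangled, domain-specific MTL representation described above.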
-
Abstract: We present a multimodal dataset of intracranial recordings, fMRI, and eye tracking in 20 participants during movie watching. Recordings consist of single-neuron, local field potential, and intracranial EEG activity acquired from depth electrodes targeting the amygdala, hippocampus, and medial frontal cortex, implanted for monitoring of epileptic seizures. Participants watched an 8-min excerpt from the video "Bang! You're Dead" and performed a recognition memory test for movie content. 3 T fMRI activity was recorded prior to surgery in 11 of these participants while they performed the same task. This NWB- and BIDS-formatted dataset includes spike times, field potential activity, behavior, eye tracking, electrode locations, demographics, and functional and structural MRI scans. For technical validation, we provide signal quality metrics; assess eye-tracking quality and behavior; characterize the tuning of cells and of high-frequency broadband field-potential power to familiarity and event boundaries; and show brain-wide inter-subject correlations for fMRI. This dataset will facilitate the investigation of brain activity during movie watching, recognition memory, and the neural basis of the fMRI-BOLD signal.
Free, publicly accessible full text available December 1, 2025.
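The brain-wide inter-subject correlation (ISC) validation mentioned above is typically computed leave-one-out: each subject's response time course is correlated with the average of everyone else's. A minimal sketch on synthetic data (the dataset itself and its exact ISC procedure are not reproduced here):

```python
import numpy as np

def inter_subject_correlation(data):
    """Leave-one-out ISC for one voxel or region.

    data: (n_subjects, n_timepoints) array of response time courses.
    Returns one correlation per subject: that subject's time course
    vs. the mean time course of all other subjects.
    """
    n = data.shape[0]
    isc = np.empty(n)
    for i in range(n):
        others = np.delete(data, i, axis=0).mean(axis=0)
        isc[i] = np.corrcoef(data[i], others)[0, 1]
    return isc

# Synthetic example: 11 subjects (as in the fMRI subset) sharing a
# common stimulus-driven signal plus subject-specific noise.
rng = np.random.default_rng(3)
shared = rng.normal(size=240)                 # common movie-driven response
data = shared + 0.8 * rng.normal(size=(11, 240))
isc = inter_subject_correlation(data)
```

High ISC in a region indicates a reliable, stimulus-locked response shared across viewers, which is why it serves as a quality check for naturalistic-viewing fMRI.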
-
Abstract: People readily (but often inaccurately) attribute traits to others based on faces. While the details of attributions depend on the language available to describe social traits, psychological theories argue that two or three dimensions (such as valence and dominance) summarize social trait attributions from faces. However, prior work has used only a small number of trait words (12 to 18), limiting conclusions to date. In two large-scale, preregistered studies we asked participants to rate 100 faces (obtained from existing face stimulus sets) using a list of 100 English trait words, which we derived using deep neural network analysis of words that participants in prior studies had used to describe faces. In study 1 we find that these attributions are best described by four psychological dimensions, which we interpret as "warmth", "competence", "femininity", and "youth". In study 2 we partially reproduce these four dimensions using the same stimuli among additional participant raters from multiple regions around the world, in both aggregated and individual-level data. These results provide a comprehensive characterization of trait attributions from faces, although we note our conclusions are limited by the scope of our study (in particular, only white faces and English trait words were included).
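Extracting a small number of summary dimensions from a faces-by-traits rating matrix is classically done with dimensionality reduction such as PCA. A minimal sketch, using synthetic ratings generated from four planted dimensions (the study's actual stimuli, ratings, and analysis details are not reproduced; the 95% variance threshold is an illustrative choice):

```python
import numpy as np

def trait_dimensions(ratings, var_threshold=0.95):
    """Estimate how many latent dimensions summarize a faces x traits
    rating matrix, via PCA (SVD of the column-centered matrix).

    ratings: (n_faces, n_traits) array of mean trait ratings.
    Returns (n_components, explained_variance_ratio).
    """
    X = ratings - ratings.mean(axis=0, keepdims=True)   # center each trait
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    var_ratio = s**2 / np.sum(s**2)                     # variance per component
    # Smallest number of components reaching the variance threshold
    n = int(np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1)
    return n, var_ratio

# Synthetic example: 100 faces rated on 100 trait words, generated
# from 4 underlying dimensions plus rating noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 4))                      # 4 planted dimensions
loadings = rng.normal(size=(4, 100))
ratings = latent @ loadings + 0.1 * rng.normal(size=(100, 100))
n_dims, _ = trait_dimensions(ratings)
```

On data with a genuine low-dimensional structure, the explained-variance curve saturates after the planted number of components; interpreting those components (e.g., as "warmth" or "competence") then requires inspecting their trait-word loadings.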
-
Abstract: People spontaneously infer other people's psychology from faces, encompassing inferences of their affective states, cognitive states, and stable traits such as personality. These judgments are known to be often invalid, but nonetheless bias many social decisions. Their importance and ubiquity have made them popular targets for automated prediction using deep convolutional neural networks (DCNNs). Here, we investigated the applicability of this approach: how well does it generalize, and what biases does it introduce? We compared three distinct sets of features (from a face identification DCNN, an object recognition DCNN, and facial geometry), and tested their prediction across multiple out-of-sample datasets. Across judgments and datasets, features from both pre-trained DCNNs provided better predictions than did facial geometry. However, predictions using object recognition DCNN features were not robust to superficial cues (e.g., color and hair style). Importantly, predictions using face identification DCNN features were not specific: models trained to predict one social judgment (e.g., trustworthiness) also significantly predicted other social judgments (e.g., femininity and criminality), in some cases at an even higher accuracy than for the judgment of interest. Models trained to predict affective states (e.g., happy) also significantly predicted judgments of stable traits (e.g., sociable), and vice versa. Our analysis pipeline not only provides a flexible and efficient framework for predicting affective and social judgments from faces, but also highlights the dangers of such automated predictions: correlated but unintended judgments can drive the predictions of the intended judgments.
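The specificity problem described above can be demonstrated with any linear read-out of pre-trained features: when two judgments are correlated, a model trained on one inevitably predicts the other. A minimal ridge-regression sketch on synthetic features (not the study's pipeline; feature dimensions, noise levels, and the "trustworthiness"/"femininity" labels here are illustrative):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression weights: (X'X + alpha*I)^(-1) X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
n, d = 500, 50
X = rng.normal(size=(n, d))                 # stand-in for DCNN face features
w = rng.normal(size=d)
trust = X @ w + 0.5 * rng.normal(size=n)    # intended judgment ("trustworthy")
femin = 0.8 * (X @ w) + 0.6 * rng.normal(size=n)  # correlated other judgment

# Train on "trustworthiness" only, then score held-out faces
w_hat = ridge_fit(X[:400], trust[:400])
pred = X[400:] @ w_hat
r_intended = np.corrcoef(pred, trust[400:])[0, 1]
r_unintended = np.corrcoef(pred, femin[400:])[0, 1]
```

Because the two judgments share variance in feature space, the unintended correlation can rival the intended one, which is exactly the danger the abstract warns about: the model's output cannot be taken as measuring the labeled judgment alone.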
-
Decision-making in complex environments relies on flexibly using prior experience. This process depends on the medial frontal cortex (MFC) and the medial temporal lobe, but it remains unknown how these structures implement selective memory retrieval. We recorded single neurons in the MFC, amygdala, and hippocampus while human subjects switched between making recognition memory–based and categorization-based decisions. The MFC rapidly implemented changing task demands by using different subspaces of neural activity and by representing the currently relevant task goal. Choices requiring memory retrieval selectively engaged phase-locking of MFC neurons to amygdala and hippocampus field potentials, thereby enabling the routing of memories. These findings reveal a mechanism for flexibly and selectively engaging memory retrieval and show that memory-based choices are preferentially represented in the frontal cortex when required.
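Spike–field phase-locking of the kind described above is commonly quantified by extracting the field potential's instantaneous phase at each spike time and measuring how concentrated those phases are (the mean resultant length, between 0 and 1). A minimal sketch on a simulated oscillation, assuming SciPy is available (the study's actual filtering, statistics, and recording parameters are not reproduced):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking(lfp, spike_idx):
    """Mean resultant length of LFP phases at spike times.
    Near 1: spikes occur at a consistent phase; near 0: no locking."""
    phase = np.angle(hilbert(lfp))           # instantaneous phase via Hilbert
    return np.abs(np.mean(np.exp(1j * phase[spike_idx])))

fs = 1000                                    # Hz, illustrative sampling rate
t = np.arange(0, 2, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)            # simulated 6 Hz field potential
# One unit fires near the oscillation trough; another fires at random times
locked_idx = np.flatnonzero(np.isclose(theta, -1, atol=0.01))
random_idx = np.random.default_rng(2).integers(0, t.size, 50)

plv_locked = phase_locking(theta, locked_idx)
plv_random = phase_locking(theta, random_idx)
```

A high value for the locked unit and a near-zero value for the random unit is the contrast that, computed selectively during memory-based choices, supports the routing interpretation in the abstract. (In practice, spike-count-dependent bias corrections such as pairwise phase consistency are used before comparing conditions.)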